nonlinear principal component analysis
Nonlinear Principal Component Analysis with Random Bernoulli Features for Process Monitoring
Industrial processes generate substantial amounts of data with highly complex structures, which has driven the development of numerous nonlinear statistical methods. However, most of these methods rely on computations involving large, dense kernel matrices, making it difficult to meet the computational demands and real-time responsiveness required by online monitoring systems. To alleviate the burden of dense large-scale matrix multiplication, we incorporate the idea of bootstrap sampling into random feature mapping and propose a novel random Bernoulli principal component analysis method that efficiently captures nonlinear patterns in the process. We derive a convergence bound for the kernel matrix approximation constructed from random Bernoulli features, ensuring theoretical robustness. We then design four fast process monitoring methods based on random Bernoulli principal component analysis, extending its nonlinear capabilities to diverse fault scenarios. Finally, numerical experiments and real-world data analyses evaluate the performance of the proposed methods. The results demonstrate excellent scalability and reduced computational complexity, with substantial cost savings and minimal performance loss compared to traditional kernel-based approaches.
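The abstract does not specify the Bernoulli feature construction, so the sketch below uses random Fourier features for a Gaussian kernel as an illustrative stand-in for the general idea it builds on: replace the dense n × n kernel matrix with an explicit n × D random feature matrix and run ordinary PCA on it. All data, names, and parameters here are hypothetical.

```python
import numpy as np

def random_fourier_features(X, n_features=200, gamma=1.0, seed=0):
    """Random feature map whose inner products approximate the
    Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# Nonlinear PCA on the explicit feature map: an SVD of the n x D matrix Z
# replaces the eigendecomposition of the dense n x n kernel matrix.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))          # stand-in for process data
Z = random_fourier_features(X, n_features=200, gamma=0.5)
Zc = Z - Z.mean(axis=0)
U, S, Vt = np.linalg.svd(Zc, full_matrices=False)
scores = Zc @ Vt[:5].T                   # first 5 nonlinear components
# Monitoring statistics such as Hotelling's T^2 would be built from `scores`.
```

The cost drops from O(n^2) kernel evaluations plus an O(n^3) eigendecomposition to O(n d D) for the feature map plus an SVD of an n × D matrix, which is the kind of saving the abstract claims for random-feature methods in general.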
Auto-associative models, nonlinear Principal component analysis, manifolds and projection pursuit
Girard, Stéphane, Iovleff, Serge
In this paper, auto-associative models are proposed as candidates for the generalization of Principal Component Analysis. We show that these models are dedicated to the approximation of the dataset by a manifold; here, the word "manifold" refers to the topological properties of the structure. The approximating manifold is built by a projection pursuit algorithm, and at each step of the algorithm the dimension of the manifold is incremented. Some theoretical properties are provided. In particular, we show that at each step of the algorithm the mean residual norm does not increase, and that the algorithm converges in a finite number of steps. Some particular auto-associative models are exhibited and compared to classical PCA and some neural network models. Implementation aspects are discussed; we show that, in numerous cases, no optimization procedure is required. Illustrations on simulated and real data are presented.
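As an illustration of the incremental scheme described above (not the authors' exact estimator), the sketch below performs one projection-pursuit step: project the data onto a one-dimensional index, reconstruct with a crude bin-wise smoother, and check that the mean residual norm does not increase. The choice of index and smoother are assumptions made for the example.

```python
import numpy as np

def pursuit_step(R, n_bins=10):
    """One illustrative step: project residuals onto a 1-D index,
    fit a bin-wise smoother along it, and return the new residuals."""
    # Projection-pursuit index: here simply the first principal direction.
    _, _, Vt = np.linalg.svd(R - R.mean(axis=0), full_matrices=False)
    t = R @ Vt[0]                          # 1-D index for each sample
    edges = np.quantile(t, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.digitize(t, edges[1:-1])      # bin label in 0..n_bins-1
    recon = np.zeros_like(R)
    for k in range(n_bins):
        mask = idx == k
        if mask.any():
            # Subtracting the bin mean cannot increase the residual sum of
            # squares, mirroring the monotonicity property cited above.
            recon[mask] = R[mask].mean(axis=0)
    return R - recon

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 500)     # data near a 1-D manifold
X = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(500, 2))
R = pursuit_step(X)
print(np.mean(np.linalg.norm(X, axis=1)), np.mean(np.linalg.norm(R, axis=1)))
```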
Robust Learning of Chaotic Attractors
Bakker, Rembrandt, Schouten, Jaap C., Coppens, Marc-Olivier, Takens, Floris, Giles, C. Lee, Bleek, Cor M. van den
A fundamental problem with the modeling of chaotic time series data is that minimizing short-term prediction errors does not guarantee a match between the reconstructed attractors of model and experiment. We introduce a modeling paradigm that simultaneously learns to make short-term predictions and to locate the outlines of the attractor by a new way of nonlinear principal component analysis. Closed-loop predictions are constrained to stay within these outlines, to prevent divergence from the attractor. Learning is exceptionally fast: parameter estimation for the 1000-sample laser data from the 1991 Santa Fe time series competition took less than a minute on a 166 MHz Pentium PC.
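The attractor-outline constraint itself cannot be reproduced from the abstract, but the step it presupposes, reconstructing the attractor by delay-coordinate embedding, is standard. A minimal sketch follows, with a synthetic series standing in for the Santa Fe laser data.

```python
import numpy as np

def delay_embed(x, dim=3, lag=1):
    """Delay-coordinate embedding: row t is
    [x[t], x[t+lag], ..., x[t+(dim-1)*lag]]."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag : i * lag + n] for i in range(dim)])

# Toy stand-in for the laser series: a nonlinearly modulated oscillation.
t = np.linspace(0.0, 60.0, 1000)
x = np.sin(t) * np.sin(0.31 * t) \
    + 0.01 * np.random.default_rng(0).normal(size=t.size)
Y = delay_embed(x, dim=3, lag=5)     # points on the reconstructed attractor
# The paper applies its nonlinear PCA to such embedded points to outline the
# attractor and keep closed-loop predictions inside it; here we just inspect
# the variance spread of the embedded cloud.
Yc = Y - Y.mean(axis=0)
variances = np.linalg.svd(Yc, compute_uv=False) ** 2 / (len(Yc) - 1)
print(variances)
```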